Despite recent progress in the field of causal inference, to date there is no agreed-upon methodology for estimating treatment effects from observational data. The consequence for clinical practice is that, when results from randomized trials are lacking, there is no guidance on what appears to be effective in real-world scenarios. This article proposes a pragmatic methodology for obtaining preliminary but robust estimates of treatment effects from observational studies, providing front-line clinicians with a degree of confidence in their treatment strategy. Our study design is applied to an open problem: estimating the treatment effect of the proning maneuver on COVID-19 intensive care patients.
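The abstract does not spell out the estimator, so as a point of reference, the following is a minimal sketch of one standard way to estimate a treatment effect from observational data: inverse propensity weighting (IPW) with a logistic propensity model. The simulated data and all variable names are hypothetical, not the study's.

```python
# Illustrative only: a generic IPW estimate of the average treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                 # patient covariates
p = 1 / (1 + np.exp(-x[:, 0]))              # confounded treatment assignment
t = rng.binomial(1, p)                      # 1 = treated (e.g., proned)
y = 2.0 * t + x[:, 0] + rng.normal(size=n)  # outcome with true effect 2.0

# Estimate propensity scores from covariates, then reweight outcomes.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"IPW estimate of the average treatment effect: {ate:.2f}")
```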
Efficient and robust control using spiking neural networks (SNNs) is still an open problem. Whilst behaviour of biological agents is produced through sparse and irregular spiking patterns, which provide both robust and efficient control, the activity patterns in most artificial spiking neural networks used for control are dense and regular -- resulting in potentially less efficient codes. Additionally, for most existing control solutions network training or optimization is necessary, even for fully identified systems, complicating their implementation in on-chip low-power solutions. The neuroscience theory of Spike Coding Networks (SCNs) offers a fully analytical solution for implementing dynamical systems in recurrent spiking neural networks -- while maintaining irregular, sparse, and robust spiking activity -- but it's not clear how to directly apply it to control problems. Here, we extend SCN theory by incorporating closed-form optimal estimation and control. The resulting networks work as a spiking equivalent of a linear-quadratic-Gaussian controller. We demonstrate robust spiking control of simulated spring-mass-damper and cart-pole systems, in the face of several perturbations, including input- and system-noise, system disturbances, and neural silencing. As our approach does not need learning or optimization, it offers opportunities for deploying fast and efficient task-specific on-chip spiking controllers with biologically realistic activity.
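For concreteness, the sketch below implements the classical (non-spiking) linear-quadratic-Gaussian controller that these networks are said to be the spiking equivalent of, applied to a spring-mass-damper; the SCN construction itself is not reproduced here, and all gains and noise levels are illustrative.

```python
# A minimal LQG controller (steady-state Kalman filter + LQR) for a
# spring-mass-damper, the non-spiking counterpart of the paper's networks.
import numpy as np
from scipy.linalg import solve_continuous_are

# Spring-mass-damper: state = [position, velocity], control = force.
m, k, c = 1.0, 1.0, 0.2
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])                   # only position is observed

Q, R = np.eye(2), np.eye(1)                  # LQR state / control costs
W, V = 0.01 * np.eye(2), 0.01 * np.eye(1)    # process / measurement noise

# LQR gain and steady-state Kalman gain from the two dual Riccati equations.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

dt, x, xhat = 1e-3, np.array([1.0, 0.0]), np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(5000):
    u = -K @ xhat                            # LQR control from the estimate
    y = C @ x + rng.normal(scale=0.1)        # noisy position measurement
    x += dt * (A @ x + B @ u)                # plant dynamics (Euler step)
    xhat += dt * (A @ xhat + B @ u + L @ (y - C @ xhat))  # Kalman filter
print("final state:", x)
```

The paper's contribution is realizing this estimator-controller pair with sparse, irregular spiking activity rather than the continuous signals used above.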
Offline reinforcement learning (RL) is suitable for safety-critical domains where online exploration is too costly or dangerous. In safety-critical settings, decision-making should take into consideration the risk of catastrophic outcomes; in other words, it should be risk-sensitive. Previous work on risk in offline RL combines offline RL techniques (to avoid distributional shift) with risk-sensitive RL algorithms (to achieve risk-sensitivity). In this work, we propose risk-sensitivity as a mechanism to jointly address both of these issues. Our model-based approach is risk-averse to both epistemic and aleatoric uncertainty. Risk-aversion to epistemic uncertainty prevents distributional shift, as areas not covered by the dataset have high epistemic uncertainty. Risk-aversion to aleatoric uncertainty discourages actions that may result in poor outcomes due to environment stochasticity. Our experiments show that our algorithm achieves competitive performance on deterministic benchmarks and outperforms existing approaches for risk-sensitive objectives in stochastic domains.
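As a rough illustration of the mechanism (not the paper's algorithm), the sketch below scores actions by a lower CVaR taken over return samples drawn from an ensemble of dynamics models, so that both ensemble disagreement (epistemic) and outcome spread (aleatoric) push the value estimate down.

```python
# Hypothetical sketch: value estimates risk-averse to both uncertainty types.
import numpy as np

def cvar(samples: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of outcomes (lower CVaR)."""
    k = max(1, int(alpha * len(samples)))
    return float(np.sort(samples)[:k].mean())

def risk_averse_value(ensemble_returns: np.ndarray, alpha: float = 0.1) -> float:
    """ensemble_returns[i, j]: return sample j under dynamics model i.
    A CVaR over the flattened samples penalizes actions whose outcomes are
    bad under *either* source of uncertainty."""
    return cvar(ensemble_returns.ravel(), alpha)

rng = np.random.default_rng(0)
# Action A: well covered by the dataset (models agree), mildly stochastic.
a = rng.normal(loc=1.0, scale=0.3, size=(5, 100))
# Action B: higher mean but off-distribution (models disagree strongly).
b = rng.normal(loc=1.3, scale=0.3, size=(5, 100)) + rng.normal(scale=2.0, size=(5, 1))
print("A:", risk_averse_value(a), " B:", risk_averse_value(b))
```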
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusion-inspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
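The abstract does not detail the training objective, so the toy sketch below only shows the basic ingredients such a framework needs: embedding categorical tokens into a continuous space, noising them at a continuous time, and decoding model outputs back to categories by scoring against the embedding table. The noise schedule and the identity "denoiser" are placeholders.

```python
# Toy sketch: continuous-time, continuous-space diffusion over token embeddings.
import torch
import torch.nn.functional as F

vocab, dim = 100, 16
emb = torch.nn.Embedding(vocab, dim)

tokens = torch.randint(vocab, (8, 32))        # (batch, sequence)
x0 = emb(tokens)                              # continuous "clean" signal
t = torch.rand(8, 1, 1)                       # continuous diffusion time
sigma = t * 10.0                              # hypothetical noise schedule
xt = x0 + sigma * torch.randn_like(x0)        # noised embeddings

# Decoding: score a (here untrained) denoiser output against the embedding
# table and train with cross-entropy, keeping the input space continuous.
denoised = xt                                 # placeholder for a denoiser net
logits = denoised @ emb.weight.T              # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab), tokens.reshape(-1))
print(loss.item())
```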
Can continuous diffusion models bring the same performance breakthrough to natural language that they did for image generation? To circumvent the discrete nature of text data, we can simply project tokens into a continuous space of embeddings, as is standard in language modeling. We propose Self-conditioned Embedding Diffusion, a continuous diffusion mechanism that operates on token embeddings and allows us to learn flexible and scalable diffusion models for both conditional and unconditional text generation. Through qualitative and quantitative evaluation, we show that our text diffusion models generate samples comparable with those produced by standard autoregressive language models, while being in theory more efficient on accelerator hardware at inference time. Our work paves the way for scaling up diffusion models for text, similarly to autoregressive models, and for improving performance with recent refinements to continuous diffusion.
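Below is a minimal sketch of the self-conditioning mechanism the name refers to, under the assumption that the denoiser receives its own previous estimate of the clean embeddings as an extra input during sampling; the network and loop are hypothetical stand-ins, not the paper's model.

```python
# Sketch of self-conditioning: the denoiser sees its previous clean estimate.
import torch

class Denoiser(torch.nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        # Input: noisy embeddings concatenated with the previous estimate.
        self.net = torch.nn.Linear(2 * dim, dim)

    def forward(self, xt, x0_prev):
        return self.net(torch.cat([xt, x0_prev], dim=-1))

model = Denoiser()
xt = torch.randn(8, 32, 16)                   # noisy token embeddings
x0_est = torch.zeros_like(xt)                 # first step: no estimate yet
for _ in range(10):                           # simplified sampling loop
    x0_est = model(xt, x0_est)                # re-use the previous estimate
    # (a real sampler would also update xt from x0_est and the schedule)
```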
Hamiltonian Monte Carlo (HMC) is a Markov chain algorithm for sampling from a high-dimensional distribution with density $e^{-f(x)}$, given query access to the gradient of $f$. A particular case of interest is that of a $d$-dimensional Gaussian distribution with covariance matrix $\Sigma$, in which case $f(x) = x^\top \Sigma^{-1} x$. We show that HMC can sample from a distribution that is $\varepsilon$-close in total variation distance using $\widetilde{O}(\sqrt{\kappa}\, d^{1/4} \log(1/\varepsilon))$ gradient queries, where $\kappa$ is the condition number of $\Sigma$. Our algorithm uses long and random integration times for the Hamiltonian dynamics. This contrasts with (and was motivated by) recent results that give an $\widetilde{\Omega}(\kappa d^{1/2})$ query lower bound for HMC with fixed integration times, even for the Gaussian case.
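To make the setting concrete, here is a generic HMC sampler for the Gaussian target $f(x) = x^\top \Sigma^{-1} x$ with leapfrog integration and a randomized integration time. The step size and the distribution of integration times are illustrative, not the tuned choices behind the stated query bound.

```python
# Generic HMC on a Gaussian target with random integration times.
import numpy as np

rng = np.random.default_rng(0)
Sigma_inv = np.diag([1.0, 10.0])          # condition number kappa = 10

def grad_f(x):
    return 2.0 * Sigma_inv @ x            # gradient of f(x) = x^T Sigma^-1 x

def hmc_step(x, eps=0.05):
    T = rng.uniform(0.5, 2.0)             # long *random* integration time
    v = rng.normal(size=x.shape)          # resample momentum
    for _ in range(int(T / eps)):         # leapfrog integration
        v -= 0.5 * eps * grad_f(x)
        x = x + eps * v
        v -= 0.5 * eps * grad_f(x)
    return x                              # (idealized: no MH correction here)

x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x)
    samples.append(x)
print("empirical covariance:\n", np.cov(np.array(samples).T))
```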
Since structured data are often insufficient, labels need to be extracted from free text in electronic health records when developing models for clinical information retrieval and decision support systems. One of the most important contextual properties in clinical text is negation, which indicates the absence of findings. We aimed to improve the large-scale extraction of labels by comparing three methods for negation detection in Dutch clinical notes. Using the Erasmus Medical Center Dutch Clinical Corpus, we compared a rule-based method based on ContextD, a BiLSTM model using MedCAT, and (finetuned) RoBERTa-based models. We found that both the BiLSTM and RoBERTa models consistently outperform the rule-based model in terms of F1 score, precision, and recall. In addition, we systematically categorized the classification errors of each model, which can be used to further improve model performance for specific applications. Combining the three models was not beneficial in terms of performance. We conclude that the BiLSTM and RoBERTa-based models in particular are highly accurate in detecting clinical negations, but that ultimately all three approaches can be viable depending on the use case at hand.
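As a sketch of the RoBERTa-style approach, one could fine-tune a Dutch RoBERTa checkpoint as a binary classifier over finding mentions; the checkpoint, label set, and input encoding below are assumptions, not the paper's exact setup.

```python
# Hedged sketch: a RoBERTa-based negation classifier for Dutch clinical text.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

name = "pdelobelle/robbert-v2-dutch-base"   # a public Dutch RoBERTa model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# How the target finding mention is marked in context is a design choice.
text = "Patient heeft geen koorts."         # "Patient has no fever."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits         # head is untrained: fine-tune first
print("P(negated) before fine-tuning:", torch.softmax(logits, -1)[0, 1].item())
```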
Urban planners increasingly use deep-learning-based computer vision models to support decisions that shape the urban environment. Such models predict how people perceive an urban space, for example its safety or beauty. However, the black-box nature of deep learning models hinders urban planners from understanding which landscape objects contribute to perceptions of particularly high- or low-quality urban space. This study investigates how computer vision models can be used to extract policy-relevant information about people's perception of urban space. To this end, we trained two widely used computer vision architectures, a convolutional neural network and a transformer, and applied GradCAM, a well-known explainable-AI technique, to highlight the image regions that are important for a model's prediction. Using these GradCAM visualizations, we manually annotated the objects relevant to the models' perception predictions. As a result, we were able to discover new objects that are not represented in the object detection models currently used for annotation in previous studies. Moreover, our methodological results suggest that the transformer architecture is better suited for combination with the GradCAM technique. Code is available on GitHub.
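Below is a minimal GradCAM sketch on a torchvision CNN, assuming the perception model outputs a scalar score (e.g., perceived safety); the trained perception model itself is not reproduced, and the hooked layer is a conventional choice.

```python
# Minimal GradCAM via forward/backward hooks on the last conv block.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
acts, grads = {}, {}
layer = model.layer4                            # last convolutional block

layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

img = torch.randn(1, 3, 224, 224)               # placeholder street-view image
score = model(img)[0, 0]                        # e.g., predicted "safety"
score.backward()

# Channel-wise weights from pooled gradients, then a ReLU-ed weighted sum.
w = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = torch.relu((w * acts["a"]).sum(dim=1))    # (1, H, W) saliency map
print(cam.shape)
```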
Camera sensors are increasingly combined with machine learning to perform various tasks such as intelligent surveillance. Due to their computational complexity, most of these machine learning algorithms are offloaded to the cloud for processing. However, users are increasingly concerned about privacy issues with third-party cloud providers, such as function creep and malicious usage. To mitigate this, we propose an edge-based filtering stage that removes privacy-sensitive attributes before the sensor data are transferred to the cloud. We use state-of-the-art image manipulation techniques that leverage disentangled representations to achieve privacy filtering. We define opt-in and opt-out filter operations and evaluate their effectiveness at filtering private attributes from face images. In addition, we study the effect of naturally occurring correlations and residual information on filtering. We find the results promising and believe this motivates further research into how image manipulation can be used for privacy preservation.
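One way to read "filtering via disentangled representations" is sketched below: encode the image, overwrite the latent factors assumed to carry a private attribute, and decode. The autoencoder, latent layout, and attribute slice are entirely hypothetical.

```python
# Illustrative opt-out filter on a (toy) disentangled representation.
import torch

class ToyAutoencoder(torch.nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.enc = torch.nn.Linear(3 * 64 * 64, dim)
        self.dec = torch.nn.Linear(dim, 3 * 64 * 64)

    def encode(self, x):
        return self.enc(x.flatten(1))

    def decode(self, z, shape):
        return self.dec(z).view(shape)

ATTR_SLICE = slice(0, 4)       # assumed dims encoding e.g. "wearing glasses"

def opt_out_filter(model, x):
    z = model.encode(x)
    z[:, ATTR_SLICE] = 0.0     # neutralize the private attribute's factors
    return model.decode(z, x.shape)

x = torch.rand(2, 3, 64, 64)   # placeholder face images
filtered = opt_out_filter(ToyAutoencoder(), x)
print(filtered.shape)
```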
Machine-learning-based models have recently gained traction as a way to overcome the slow downstream implementation process of FPGAs by building models that provide fast and accurate performance predictions. However, these models have two main limitations: (1) training requires large amounts of data (features extracted from FPGA synthesis and implementation reports), which is cost-inefficient because of the time-consuming FPGA design cycle; (2) a model trained for a specific environment cannot predict for a new, unknown environment. In a cloud system, where platform access is typically costly, data collection for ML models can significantly increase the total cost of ownership (TCO) of the system. To overcome these limitations, we propose Leaper, a transfer-learning-based approach for FPGAs that adapts an existing ML-based model to a new, unknown environment to provide fast and accurate predictions of performance and resource utilization. Experimental results show that our approach provides, on average, 85% accuracy when we use the transferred model for predictions in a cloud environment with 5-shot learning, and reduces design-space exploration time from days to a few hours.
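Below is a schematic of the transfer step, under the assumption that the source-environment model's representation layers are kept frozen while a small head is re-fit on five samples from the new environment; the architecture and features are placeholders, not Leaper's actual model.

```python
# Hypothetical 5-shot transfer of a performance-prediction model.
import torch

class Predictor(torch.nn.Module):
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(n_features, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 64), torch.nn.ReLU())
        self.head = torch.nn.Linear(64, 1)   # e.g., predicted clock frequency

    def forward(self, x):
        return self.head(self.body(x))

model = Predictor()                          # assume: pretrained on source env
for p in model.body.parameters():            # freeze the transferred body
    p.requires_grad = False

x_new = torch.randn(5, 20)                   # 5-shot data from the new env
y_new = torch.randn(5, 1)
opt = torch.optim.Adam(model.head.parameters(), lr=1e-2)
for _ in range(200):                         # fine-tune only the head
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x_new), y_new)
    loss.backward()
    opt.step()
print("adapted-head loss:", loss.item())
```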